Search for: All records

Creators/Authors contains: "Roten, D"


  1. We integrate GPU-aware MVAPICH2 into AWP-ODC, a scalable finite-difference code for wave propagation in nonlinear media. On OLCF Frontier, HIP-aware MVAPICH2 yields a 17.8% time-to-solution (T2S) improvement over the non-GPU-aware version and achieves 95% parallel efficiency on 65,536 AMD MI250X GCDs. On TACC Vista, CUDA-aware MVAPICH2 delivers a 3.5% performance gain across 2-256 NVIDIA GH200 GPUs, with parallel efficiencies of 82% in the linear case and 92% in the more computationally intensive nonlinear case. We deploy the code for production-scale earthquake simulations on leadership-class systems.
    Free, publicly-accessible full text available August 20, 2026
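
"GPU-aware MPI" here has a concrete API meaning: the library accepts device pointers directly in communication calls, so halo exchanges avoid staging through host memory. The sketch below illustrates the idea in MPI plus CUDA C; it is a minimal illustration under assumed names and packing details, not AWP-ODC's actual exchange routine.

```c
/* Minimal sketch of a GPU-aware halo exchange. With a GPU-aware MPI
 * build (e.g., MVAPICH2 with CUDA or HIP support), device pointers
 * are legal arguments to MPI calls, so data can move GPU-to-GPU
 * without an intermediate host copy. Hypothetical names throughout;
 * this is not AWP-ODC's actual communication code. */
#include <mpi.h>
#include <cuda_runtime.h>

void exchange_halo(float *d_field, int halo_count,
                   int left, int right, MPI_Comm comm)
{
    (void)d_field; /* packing/unpacking kernels touching d_field omitted */

    float *d_send, *d_recv;
    cudaMalloc((void **)&d_send, halo_count * sizeof(float));
    cudaMalloc((void **)&d_recv, halo_count * sizeof(float));

    /* ... pack the boundary plane of d_field into d_send on the GPU ... */

    MPI_Request reqs[2];
    /* Device pointers passed straight to MPI: the GPU-aware library
     * performs the transfer (e.g., via GPUDirect RDMA). */
    MPI_Irecv(d_recv, halo_count, MPI_FLOAT, left,  0, comm, &reqs[0]);
    MPI_Isend(d_send, halo_count, MPI_FLOAT, right, 0, comm, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    /* ... unpack d_recv into the ghost layer of d_field ... */

    cudaFree(d_send);
    cudaFree(d_recv);
}
```

Without GPU awareness, the same exchange needs an extra cudaMemcpy to a host buffer on each side of the MPI call; that staging cost is the kind of overhead the reported T2S improvement comes from avoiding.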
  2. We have implemented GPU-aware support across all AWP-ODC versions and enhanced the message-passing collective communications of this memory-bound finite-difference solver. This provides cutting-edge communication support for production simulations on leadership-class computing facilities, including OLCF Frontier and TACC Vista. We achieved significant performance gains, reaching a sustained 37 Petaflop/s and reducing time-to-solution by 17.2% with the GPU-aware feature on 8,192 Frontier nodes, or 65,536 MI250X GCDs. The AWP-ODC code has also been optimized for TACC Vista, an Arm-based system built on the NVIDIA GH200 Grace Hopper Superchip, demonstrating excellent application performance. This poster will showcase these studies and the code's GPU performance characteristics. We will discuss our verification of the GPU-aware development and the use of high-performance MVAPICH libraries, including on-the-fly compression, on modern GPU clusters.
    Free, publicly-accessible full text available September 10, 2026
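
The collective-communication enhancements mentioned above follow the same pattern: with a CUDA- or HIP-aware MPI, device buffers can be handed directly to collectives such as MPI_Allreduce. A minimal sketch follows; the peak-velocity reduction is an illustrative use case, not code taken from AWP-ODC.

```c
/* Sketch of a GPU-aware collective: device buffers passed directly
 * to MPI_Allreduce, avoiding the device-to-host copies a non-aware
 * build would require. Illustrative only; not AWP-ODC code. */
#include <mpi.h>
#include <cuda_runtime.h>

float global_peak(const float *d_local_peak, MPI_Comm comm)
{
    float *d_global;
    cudaMalloc((void **)&d_global, sizeof(float));

    /* Both send and receive buffers live in GPU memory. */
    MPI_Allreduce(d_local_peak, d_global, 1, MPI_FLOAT, MPI_MAX, comm);

    float h_global;
    cudaMemcpy(&h_global, d_global, sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d_global);
    return h_global;
}
```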
  3. AWP-ODC is a 4th-order finite-difference code used for linear wave propagation, Iwan-type nonlinear dynamic rupture and wave propagation, and Strain Green Tensor simulation. We have ported and verified the linear and topography versions of AWP-ODC, including discontinuous-mesh support, to HIP so that the code can also run on AMD GPUs. The topography code achieved 99.6% parallel efficiency on 4,096 nodes of Frontier, a Leadership Computing Facility system at Oak Ridge National Laboratory. We have also implemented CUDA-aware features and on-the-fly GDR compression in the linear version of the ported HIP code. These enhancements significantly improve data-transfer efficiency between GPUs, reducing communication overhead and boosting overall performance. We have extended the CUDA-aware features to the topography version as well and are actively working on incorporating GDR compression into that version. We observe a 154% benefit over Intel MPI (IMPI) when using MVAPICH2-GDR with CUDA-aware support and on-the-fly compression for linear AWP-ODC on Lonestar-6 A100 nodes. Furthermore, we have successfully integrated a checkpointing feature into the nonlinear Iwan version of AWP-ODC, in preparation for future extreme-scale simulations during Texascale Days on Frontera at TACC.
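
On the checkpointing feature mentioned in the entry above: for extreme-scale runs such as Texascale Days, periodic checkpoints let a simulation restart after a node failure or a queue-time limit. The sketch below shows one plausible per-rank shape of such a writer in CUDA C; the state layout and file naming are assumptions, not AWP-ODC's actual checkpoint format.

```c
/* Sketch of a per-rank checkpoint writer: copy the evolving GPU
 * state to the host and write it to a rank/step-tagged file.
 * Layout and naming are hypothetical, not AWP-ODC's format. */
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int write_checkpoint(const float *d_state, size_t n, int rank, int step)
{
    float *h_state = (float *)malloc(n * sizeof(float));
    if (!h_state)
        return -1;
    cudaMemcpy(h_state, d_state, n * sizeof(float), cudaMemcpyDeviceToHost);

    char fname[64];
    snprintf(fname, sizeof(fname), "ckpt_r%06d_s%08d.bin", rank, step);

    FILE *f = fopen(fname, "wb");
    if (!f) {
        free(h_state);
        return -1;
    }
    size_t written = fwrite(h_state, sizeof(float), n, f);
    fclose(f);
    free(h_state);
    return written == n ? 0 : -1;
}
```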
  4. AWP-ODC is a 4th-order finite-difference code used by the SCEC community for linear wave propagation, Iwan-type nonlinear dynamic rupture and wave propagation, and Strain Green Tensor simulation. We have ported and verified the CUDA version of AWP-ODC-SGT, a reciprocal version used in the SCEC CyberShake project, to HIP so that it can also run on AMD GPUs. This code achieved a sustained 32.6 Petaflop/s and 95.6% parallel efficiency at full scale on Frontier, a Leadership Computing Facility at Oak Ridge National Laboratory. The readiness of this community software on AMD Radeon Instinct GPUs and EPYC CPUs allows SCEC to take advantage of exascale systems to produce more realistic ground motions and more accurate seismic hazard products.

     We have also deployed AWP-ODC on Azure to leverage the tools and services that Azure provides for tightly coupled HPC simulation on a commercial cloud. We collaborated with the Internet2/Azure Accelerator support team, as part of the Microsoft Internet2/Azure Accelerator for Research Fall 2022 Program, with Azure credits awarded through CloudBank, an NSF-funded initiative. We demonstrate AWP-ODC performance with a ground-motion simulation benchmark on various GPU-based cloud instances and a comparison of the cloud solution to on-premises bare-metal systems.

     AWP-ODC currently achieves excellent speedup and efficiency on both CPU and GPU architectures. The Iwan-type dynamic rupture and wave propagation solver, however, faces significant challenges, because the computational workload grows with the number of yield surfaces chosen. Compared to the linear solution, the Iwan model adds 10x-30x more computational time and 5x-13x more memory consumption, requiring substantial code changes to obtain excellent performance. Supported by NSF's Characteristic Science Applications (CSA) program for the Leadership-Class Computing Facility (LCCF) at the Texas Advanced Computing Center (TACC), we are porting and improving the performance of this nonlinear AWP-ODC software in preparation for Horizon, the next-generation NSF LCCF system to be installed at TACC. During Texascale Days on TACC's current Frontera system, we carried out an Iwan-type nonlinear dynamic rupture and wave propagation simulation of a Mw 7.8 scenario earthquake on the southern San Andreas fault. This simulation modeled 83 seconds of rupture with a grid spacing of 25 m, resolving frequencies up to 4 Hz with a minimum shear-wave velocity of 500 m/s.
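
The resolution figures quoted in the last entry can be checked with the standard finite-difference sampling argument: the highest resolvable frequency is set by how many grid points sample the minimum wavelength. Using only the numbers given in the abstract (which does not state the sampling criterion AWP-ODC itself applies):

```latex
\lambda_{\min} = \frac{v_{s,\min}}{f_{\max}} = \frac{500\ \mathrm{m/s}}{4\ \mathrm{Hz}} = 125\ \mathrm{m},
\qquad
\frac{\lambda_{\min}}{\Delta x} = \frac{125\ \mathrm{m}}{25\ \mathrm{m}} = 5\ \text{points per minimum wavelength}.
```

Five points per minimum wavelength is a commonly cited rule of thumb for 4th-order staggered-grid schemes, consistent with the abstract's pairing of 25 m spacing with a 4 Hz target.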